IS

Comyn-Wattiau, Isabelle

Topic Weight    Topic Terms
0.310           evaluation, effectiveness, assessment, evaluating, paper, objectives, terms, process, assessing, criteria, evaluations, methodology, provides, impact, literature
0.214           design, artifacts, alternative, method, artifact, generation, approaches, alternatives, tool, science, generate, set, promising, requirements, evaluation
0.126           research, information, systems, science, field, discipline, researchers, principles, practice, core, methods, area, reference, relevance, conclude

[Co-authorship network graph: focal researcher, coauthors of the focal researcher (1st degree), and coauthors of coauthors (2nd degree); edge labels give the number of co-authorships.]

Coauthors: Akoka, Jacky (1 co-authorship); Prat, Nicolas (1 co-authorship)
Keywords: artifact evaluation (1); content analysis (1); design evaluation (1); design science system (1); design taxonomy (1)

Articles (1)

A Taxonomy of Evaluation Methods for Information Systems Artifacts (Journal of Management Information Systems, 2015)
Authors: Prat, Nicolas; Comyn-Wattiau, Isabelle; Akoka, Jacky
Abstract:
    Artifacts, such as software systems, pervade organizations and society. In the field of information systems (IS) they form the core of research. The evaluation of IS artifacts thus represents a major issue. Although IS research paradigms are increasingly intertwined, building and evaluating artifacts has traditionally been the purview of design science research (DSR). DSR in IS has not reached maturity yet. This is particularly true of artifact evaluation. This paper investigates the "what" and the "how" of IS artifact evaluation: what are the objects and criteria of evaluation, the methods for evaluating the criteria, and the relationships between the "what" and the "how" of evaluation? To answer these questions, we develop a taxonomy of evaluation methods for IS artifacts. With this taxonomy, we analyze IS artifact evaluation practice, as reflected by ten years of DSR publications in the basket of journals of the Association for Information Systems (AIS). This research brings to light important relationships between the dimensions of IS artifact evaluation, and identifies seven typical evaluation patterns: demonstration; simulation- and metric-based benchmarking of artifacts; practice-based evaluation of effectiveness; simulation- and metric-based absolute evaluation of artifacts; practice-based evaluation of usefulness or ease of use; laboratory, student-based evaluation of usefulness; and algorithmic complexity analysis. This study also reveals a focus of artifact evaluation practice on a few criteria. Beyond immediate usefulness, IS researchers are urged to investigate ways of evaluating the long-term organizational impact and the societal impact of artifacts.